
    Randomized routing algorithms in mobile ad hoc networks

    We consider the problem of finding a path from a source to a destination node in a set of mobile wireless nodes. Many solutions to this problem proposed in the literature fall into the position-based routing paradigm, where in each step, the decision of which node to go to next is based only on the position or geographic coordinates of the current node c, its neighboring nodes N(c), and the destination node d. We propose several new randomized position-based algorithms for routing in mobile ad hoc networks. Our algorithms combine the greedy heuristic of minimizing the distance remaining to the destination and the directional heuristic of staying close to the direction of the destination with the use of randomization to retain some flexibility in the chosen routes. We classify our randomized algorithms based on the strategy they use to define a subset of neighboring nodes as the candidate nodes. The sector-based algorithms select the candidate nodes from a specified sector, whereas the AB (above-below) algorithms choose two candidate nodes, one from above and the other from below the line between the current node and the destination. On convex subdivisions, a sub-class of AB algorithms can be shown to deliver packets to their destinations with probability 1. Our experiments on unit disk graphs, and their associated Yao graphs, Gabriel graphs, and Planarized Local Delaunay Triangulations, show that the delivery rates of all the randomized algorithms we study are significantly better than those of the deterministic greedy and directional routing algorithms. For some of the algorithms we propose, this improvement comes at the price of only a small deterioration in the stretch factor of the route. Thus, some of our algorithms obtain a good balance between the delivery rate and the stretch factor.
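    The sector-based strategy described above can be sketched in a few lines: restrict the candidate set to neighbors lying within a fixed angular sector around the direction to the destination, pick one at random, and fall back to the greedy choice when the sector is empty. This is an illustrative sketch, not the paper's exact algorithm; the half-angle parameter and the greedy fallback are assumptions made here for concreteness.

    ```python
    import math
    import random

    def sector_candidates(c, neighbors, d, half_angle=math.pi / 6):
        """Neighbors within +/- half_angle of the direction from c to d."""
        ref = math.atan2(d[1] - c[1], d[0] - c[0])
        cands = []
        for n in neighbors:
            ang = math.atan2(n[1] - c[1], n[0] - c[0])
            # Smallest absolute angular difference, wrapped to [-pi, pi].
            diff = abs((ang - ref + math.pi) % (2 * math.pi) - math.pi)
            if diff <= half_angle:
                cands.append(n)
        return cands

    def next_hop(c, neighbors, d, rng=random):
        """Randomized sector-based step: uniform random candidate in the
        sector; greedy (closest to d) fallback when the sector is empty."""
        cands = sector_candidates(c, neighbors, d)
        if cands:
            return rng.choice(cands)
        return min(neighbors, key=lambda n: math.dist(n, d))
    ```

    Randomizing within the sector is what gives the route flexibility: repeated attempts can escape local configurations where the purely greedy or purely directional choice fails.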

    Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking

    The increasing use of Machine Learning (ML) software can lead to unfair and unethical decisions, thus fairness bugs in software are becoming a growing concern. Addressing these fairness bugs often involves sacrificing ML performance, such as accuracy. To address this issue, we present a novel approach that uses counterfactual thinking to tackle the root causes of bias in ML software. In addition, our approach combines models optimized for both performance and fairness, resulting in an optimal solution in both aspects. We conducted a thorough evaluation of our approach on 10 benchmark tasks using a combination of 5 performance metrics, 3 fairness metrics, and 15 measurement scenarios, all applied to 8 real-world datasets. These extensive evaluations show that the proposed method significantly improves the fairness of ML software while maintaining competitive performance, outperforming state-of-the-art solutions in 84.6% of overall cases based on a recent benchmarking tool.
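    The core counterfactual idea can be illustrated with a minimal probe: flip a sample's (binary) sensitive attribute and check whether the model's prediction changes. The function below is a generic sketch of such a check, not the paper's method; the names and the binary-attribute assumption are illustrative.

    ```python
    def counterfactual_consistency(predict, X, sensitive_idx):
        """Fraction of samples whose prediction is unchanged when the
        binary sensitive attribute at sensitive_idx is flipped.
        1.0 means the model is counterfactually consistent on X."""
        unchanged = 0
        for row in X:
            flipped = list(row)
            flipped[sensitive_idx] = 1 - flipped[sensitive_idx]
            if predict(row) == predict(flipped):
                unchanged += 1
        return unchanged / len(X)
    ```

    A low score flags predictions that hinge on the sensitive attribute itself, which is one root cause of bias the abstract alludes to.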

    Preventing Discriminatory Decision-making in Evolving Data Streams

    Bias in machine learning has rightly received significant attention over the last decade. However, most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting. Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking. The unique challenges of the online environment make addressing bias more difficult than in the offline setting. First, Streaming Machine Learning (SML) algorithms must deal with the constantly evolving real-time data stream. Second, they need to adapt to changing data distributions (concept drift) to make accurate predictions on new incoming data. Adding fairness constraints to this already complicated task is not straightforward. In this work, we focus on the challenges of achieving fairness in biased data streams while accounting for the presence of concept drift, accessing one sample at a time. We present Fair Sampling over Stream (FS^2), a novel fair rebalancing approach capable of being integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness Bonded Utility (FBU), to evaluate and compare the trade-off between performance and fairness of different bias mitigation methods efficiently. FBU simplifies the comparison of fairness-performance trade-offs of multiple techniques through one unified and intuitive evaluation, allowing model designers to easily choose a technique. Overall, extensive evaluations show that our approach surpasses other fair online techniques previously reported in the literature.
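    The one-sample-at-a-time rebalancing setting can be sketched as follows: maintain running counts per (group, label) combination and present each arriving sample to the learner with probability inversely proportional to how over-represented its combination is so far. This is a generic stream-rebalancing sketch under assumptions made here; it is not the paper's FS^2 algorithm.

    ```python
    import random
    from collections import defaultdict

    def fair_stream(stream, train, rng=random):
        """Process (sample, group, label) triples one at a time, keeping
        rarer (group, label) combinations with higher probability so the
        learner sees a more balanced stream."""
        counts = defaultdict(int)
        seen = 0
        for x, group, y in stream:
            seen += 1
            counts[(group, y)] += 1
            # keep_prob == 1 when this combination has its fair share
            # of the stream; below 1 when it is over-represented.
            keep_prob = min(1.0, seen / (len(counts) * counts[(group, y)]))
            if rng.random() < keep_prob:
                train(x, y)
    ```

    Because the counts evolve with the stream, the keep probabilities track distribution changes, which is one simple way to stay responsive under concept drift.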

    On the Overheads of Ad Hoc Routing Schemes


    Profiling-Based Indoor Localization Schemes


    On the Impact of Node Placement and Profile Point Selection on Indoor Localization

    We present an indoor localization technique based on RF profiling using the received signal strength (RSS) measurements from a set of pre-selected reference points. We do not attach any interpretative significance to the measurements other than using them to calculate their difference from the measurements of the reference points. We study the performance of our technique in an environment with multiple adjacent rooms and find that it gives better results compared to the application of the k-Nearest Neighbor algorithm that has been used in the literature for the same task. We also study the proposed scheme and two other well-known localization schemes, with respect to the sensitivity of the localization on the number and layout of the reference points, as well as on the number and layout of the deployed fixed points (pegs) from where the measurements are collected. We find that one can achieve good localization performance with either fewer reference points or with fewer pegs as long as their layout is chosen carefully.
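    The profile-matching step described above reduces, in its simplest form, to comparing an observed RSS vector against the stored profile of each reference point and choosing the closest. The sketch below uses Euclidean distance as the difference measure; that choice, and the dictionary layout, are assumptions made here for illustration, not the paper's exact scheme.

    ```python
    import math

    def locate(query_rss, profiles):
        """Return the reference point whose stored RSS profile is closest
        (Euclidean distance) to the observed measurement vector."""
        best, best_d = None, math.inf
        for point, rss in profiles.items():
            d = math.dist(query_rss, rss)
            if d < best_d:
                best, best_d = point, d
        return best
    ```

    Treating the raw RSS values only through their differences, as the abstract notes, avoids modeling signal propagation and makes the scheme a pure profile lookup.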